Action recognition is an important task for video understanding with broad applications. However, developing an effective action recognition solution often requires extensive engineering effort in building and testing different combinations of modules and their hyperparameters. In this demo, we present AutoVideo, a Python system for automated video action recognition. AutoVideo features 1) a highly modular and extendable infrastructure following the standard pipeline language, 2) a list of primitives for pipeline construction, 3) data-driven tuners to save the effort of pipeline tuning, and 4) an easy-to-use graphical user interface (GUI). AutoVideo is released under the MIT license at https://github.com/datamllab/autovideo
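To make the pipeline-of-primitives design concrete, here is a minimal sketch of the idea in Python. The `Pipeline` class and the `resize`/`flip` primitives are illustrative stand-ins, not AutoVideo's actual API; see the repository for the real interface.

```python
# Minimal sketch of a "pipeline of primitives" in the spirit of AutoVideo.
# The class and primitive names are illustrative, not AutoVideo's real API.
from typing import Callable, List

class Pipeline:
    """Chain data-processing primitives, each a callable with fixed hyperparameters."""
    def __init__(self, steps: List[Callable]):
        self.steps = steps

    def run(self, data):
        for step in self.steps:
            data = step(data)
        return data

# Primitives are small, swappable units; a data-driven tuner would search
# over combinations of primitives and their hyperparameters.
def resize(h: int, w: int) -> Callable:
    return lambda frames: [f"resized({h}x{w}):{x}" for x in frames]

def flip(ratio: float) -> Callable:
    return lambda frames: [f"flip(p={ratio}):{x}" for x in frames]

pipeline = Pipeline([resize(128, 128), flip(0.5)])
print(pipeline.run(["frame0", "frame1"]))
```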
Social networking sites, blogs, and online articles are instant sources of news for internet users globally. However, in the absence of strict regulations mandating the genuineness of every text on social media, it is probable that some of these texts are fake news or rumours. Their deceptive nature and ability to propagate instantly can have an adverse effect on society. This necessitates more effective methods for detecting fake news and rumours on the web. In this work, we annotate four fake news detection and rumour detection datasets with emotion class labels using transfer learning. We show the correlation between the legitimacy of a text and its intrinsic emotion for fake news and rumour detection, and show that even within the same emotion class, fake and real news are often represented differently, which can be used for improved feature extraction. Based on this, we propose a multi-task framework for fake news and rumour detection that predicts both the emotion and the legitimacy of the text. We train a variety of deep learning models in single-task and multi-task settings for a more comprehensive comparison. We further analyze the performance of our multi-task approach for fake news detection in cross-domain settings, to verify its efficacy for better generalization across datasets and to verify that emotions act as a domain-independent feature. Experimental results confirm that our multi-task models consistently outperform their single-task counterparts in terms of accuracy, precision, recall, and F1 score, in both in-domain and cross-domain settings. We also qualitatively analyze the difference in performance between single-task and multi-task learning models.
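As an illustration of the multi-task setup described above, here is a minimal PyTorch sketch: a shared text encoder feeding two heads, one predicting legitimacy (fake vs. real) and one predicting the emotion class, trained with a joint loss. The architecture, layer sizes, and the six-way emotion scheme are assumptions for illustration, not the paper's exact models.

```python
# Sketch of a shared-encoder multi-task model; sizes are illustrative.
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden=256, n_emotions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.legitimacy_head = nn.Linear(hidden, 2)        # fake vs. real
        self.emotion_head = nn.Linear(hidden, n_emotions)  # auxiliary task

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        shared = h[-1]                                     # shared representation
        return self.legitimacy_head(shared), self.emotion_head(shared)

# Joint loss: both tasks supervise the shared encoder.
model = MultiTaskDetector()
tokens = torch.randint(0, 30000, (4, 32))                 # dummy batch
legit_logits, emo_logits = model(tokens)
loss = nn.functional.cross_entropy(legit_logits, torch.randint(0, 2, (4,))) \
     + nn.functional.cross_entropy(emo_logits, torch.randint(0, 6, (4,)))
loss.backward()
```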
The vision community has explored numerous pose-guided human editing methods due to their extensive practical applications. Most of these methods still use an image-to-image formulation in which a single image is given as input to produce an edited image as output. However, the problem is ill-defined in cases where the target pose differs significantly from the input pose. Existing methods then resort to in-painting or style transfer to handle occlusions and preserve content. In this paper, we explore the use of multiple views to minimize the issue of missing information and generate an accurate representation of the underlying human model. To fuse knowledge from multiple viewpoints, we design a selector network that takes the pose keypoints and texture from images and generates an interpretable per-pixel selection map. After that, the encodings from a separate network (trained on a single-image human reposing task) are merged in the latent space. This enables us to generate accurate, precise, and visually coherent images for different editing tasks. We show the application of our network on two newly proposed tasks: multi-view human reposing and mix-and-match human image generation. Additionally, we study the limitations of single-view editing and the scenarios in which multi-view editing provides a much better alternative.
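To illustrate the per-pixel selection idea, here is a rough PyTorch sketch in which a small convolutional head scores each view at every pixel, and a softmax over views yields an interpretable selection map used to blend per-view features. The channel counts and the weighted-sum fusion are assumptions, not the paper's exact design.

```python
# Sketch of per-pixel view selection; sizes and fusion rule are illustrative.
import torch
import torch.nn as nn

class Selector(nn.Module):
    def __init__(self, feat_ch=16):
        super().__init__()
        self.score = nn.Conv2d(feat_ch, 1, kernel_size=3, padding=1)

    def forward(self, view_feats):                     # list of (B, C, H, W)
        scores = torch.stack([self.score(f) for f in view_feats], dim=1)
        weights = torch.softmax(scores, dim=1)         # (B, V, 1, H, W) selection map
        fused = (weights * torch.stack(view_feats, dim=1)).sum(dim=1)
        return fused, weights.squeeze(2)               # fused features + map

views = [torch.randn(1, 16, 64, 64) for _ in range(2)]
fused, selection_map = Selector()(views)
print(fused.shape, selection_map.shape)                # (1,16,64,64) (1,2,64,64)
```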
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
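Because the models are publicly released, they can be loaded with the Hugging Face transformers library. The 176B checkpoint needs large multi-GPU hardware, so this sketch uses the small `bigscience/bloom-560m` sibling checkpoint; swap in a larger model id as your hardware allows.

```python
# Load a small BLOOM checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```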
Linear classifier probes are frequently utilized to better understand how neural networks function. Researchers have approached the problem of determining unit importance in neural networks by probing their learned, internal representations. Linear classifier probes identify highly selective units as the most important for network function. Whether or not a network actually relies on highly selective units can be tested by removing them from the network using ablation. Surprisingly, when highly selective units are ablated, they produce only small performance deficits, and even then only in some cases. Despite the absence of ablation effects for selective neurons, linear decoding methods can be effectively used to interpret network function, leaving their effectiveness a mystery. To falsify the exclusive role of selectivity in network function and resolve this contradiction, we systematically ablate groups of units in subregions of activation space. Here, we find a weak relationship between neurons identified by probes and those identified by ablation. More specifically, we find that an interaction between the selectivity and the average activity of a unit better predicts ablation performance deficits for groups of units in AlexNet, VGG16, MobileNetV2, and ResNet101. Linear decoders are likely somewhat effective because they overlap with those units that are causally important for network function. Interpretability methods could be improved by focusing on causally important units.
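A minimal PyTorch sketch of the ablation procedure: silence a chosen group of channels in one layer with a forward hook, then compare accuracy with and without the hook. The layer and the unit indices are placeholders; in the study, the group would be chosen by probe selectivity or by position in activation space.

```python
# Ablate a group of units in one AlexNet layer via a forward hook.
import torch
import torchvision.models as models

model = models.alexnet(weights=None).eval()
units_to_ablate = [3, 17, 42]          # e.g., units ranked by selectivity

def ablate(module, inputs, output):
    output[:, units_to_ablate] = 0.0   # silence the chosen channels
    return output

# Register on one convolutional layer; measure the accuracy drop vs. baseline.
handle = model.features[3].register_forward_hook(ablate)
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
handle.remove()
```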
In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning. In practice, however, training sequence VAEs is challenging: autoregressive decoders can often explain the data without using the latent space, a phenomenon known as posterior collapse. To mitigate this, state-of-the-art models weaken the powerful decoder by applying uniformly random dropout to the decoder input. We show theoretically that this removes pointwise mutual information provided by the decoder input, which is then compensated for by utilizing the latent space. We then propose an adversarial training strategy to realize information-based stochastic dropout. Compared to uniform dropout on standard text benchmark datasets, our targeted approach simultaneously improves sequence modeling performance and the amount of information captured in the latent space.
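A sketch of the decoder-input weakening discussed above: standard sequence VAEs drop decoder input tokens uniformly at random ("word dropout"), replacing them with an unknown token. The uniform baseline is shown below; the paper's adversarial strategy would instead choose which tokens to drop based on their information content. The token id convention is an assumption for illustration.

```python
# Uniform word dropout on decoder inputs, the baseline the paper improves on.
import torch

UNK_ID = 1  # assumed id of the <unk> token

def uniform_word_dropout(token_ids: torch.Tensor, rate: float) -> torch.Tensor:
    """Replace each decoder input token with <unk> independently with prob `rate`."""
    mask = torch.rand_like(token_ids, dtype=torch.float) < rate
    return token_ids.masked_fill(mask, UNK_ID)

decoder_inputs = torch.randint(2, 1000, (4, 16))   # dummy batch of token ids
weakened = uniform_word_dropout(decoder_inputs, rate=0.4)
```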
The introduction of computational techniques for analyzing chemical data has given rise to the analytical study of biological systems known as "bioinformatics". One facet of bioinformatics is the use of machine learning (ML) techniques to detect multivariable trends in various cases. Among the most pressing cases is predicting blood-brain barrier (BBB) permeability. The development of new drugs to treat central nervous system disorders presents unique challenges due to poor penetration efficacy across the blood-brain barrier. In this research, we aim to mitigate this problem through an ML model that analyzes chemical features. To do so: (i) an overview of the relevant biological systems and processes as well as the use case is given; (ii) an in-depth literature review of existing computational techniques for detecting BBB permeability is undertaken, from which a gap across current techniques is identified and a solution is proposed; (iii) lastly, a two-part model to quantify the permeability of drugs with defined features across the BBB through passive diffusion is developed, tested, and reflected upon. Testing and validation with a dataset determined that the predictive logBB model had a mean squared error of approximately 0.112 units and the neuroinflammation model a mean squared error of approximately 0.3 units, outperforming all relevant studies.
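As a sketch of what a chemical-feature-based logBB regressor can look like, the snippet below computes a few standard RDKit descriptors and fits a scikit-learn regressor. The molecules, target values, and descriptor choice are dummy placeholders, not the study's dataset or exact feature set.

```python
# Toy logBB regression from physicochemical descriptors; data is dummy.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    # Molecular weight, lipophilicity, and polar surface area as features.
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]  # placeholder molecules
logbb  = [-0.1, 0.4, -0.5]                              # placeholder targets

model = RandomForestRegressor(n_estimators=100).fit(
    [featurize(s) for s in smiles], logbb)
print(model.predict([featurize("CCN")]))
```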
Large pretrained models (e.g., GPT-3) achieve remarkable performance by being exposed to huge amounts of data during training. Analogously, distilling such large models into compact models for efficient deployment also necessitates a large amount of (labeled or unlabeled) training data. In this paper, we propose the teacher-guided training (TGT) framework for training a high-quality compact model, which leverages the knowledge acquired by pretrained generative models while obviating the need for a large amount of data. TGT exploits the fact that the teacher has acquired a good representation of the underlying data domain, which typically corresponds to a much lower-dimensional manifold than the input space. Furthermore, we can use the teacher to explore the input space more efficiently through sampling or gradient-based methods. Accordingly, TGT is especially attractive for limited-data or long-tail settings. We formally capture the benefit of the proposed data-domain exploration in our generalization bounds. We find that TGT improves accuracy on several image classification benchmarks as well as a range of text classification and retrieval tasks.
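A schematic PyTorch sketch of the teacher-guided idea: draw synthetic inputs from (or near) the teacher's data manifold, then train the compact student to match the teacher's predictions on them. The tiny models and the random sampler here are stand-ins; TGT's actual exploration uses a pretrained generative model and gradient-based search.

```python
# Schematic teacher-guided distillation loop; models and sampler are stand-ins.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(nn.Linear(32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def sample_inputs(batch: int) -> torch.Tensor:
    # Placeholder: in TGT this would come from the teacher's generative model.
    return torch.randn(batch, 32)

for step in range(100):
    x = sample_inputs(64)
    with torch.no_grad():
        targets = teacher(x).softmax(dim=-1)            # teacher soft labels
    loss = nn.functional.cross_entropy(student(x), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```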
Question answering systems these days typically use template-based language generation. Although adequate for domain-specific tasks, such systems are too restrictive and predefined for domain-independent ones. This paper proposes a system that outputs a full-length answer given a question and an extracted factoid answer (a short span such as a named entity) as input. Our system uses constituency and dependency parse trees of the question. A transformer-based grammatical error correction model, GECToR (2020), is used as a post-processing step for better fluency. We compare our system with (i) a modified pointer-generator (SOTA) and (ii) fine-tuned DialoGPT. We also test our approach on (yes/no) questions, with better results. Our model generates more accurate and fluent answers than state-of-the-art (SOTA) methods. Evaluation is done on the NewsQA and SQuAD datasets, with gains of 0.4 and 0.9 percentage points in ROUGE score, respectively. Inference time is also reduced by 85% compared to the SOTA. The modified datasets used for our evaluation will be released as part of the research contribution.
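An illustrative skeleton of the described pipeline: derive a declarative sentence from the question, splice in the extracted factoid answer, then run a grammar-correction pass for fluency. The naive string rule and the `correct_grammar` stub below are assumptions standing in for the paper's parse-tree-based rules and the GECToR model.

```python
# Toy question-to-full-answer pipeline; the rule and GEC step are stand-ins.
def draft_answer(question: str, factoid: str) -> str:
    # Toy rule: "Who/What <verb phrase>?" -> "<factoid> <verb phrase>."
    body = question.rstrip("?").split(" ", 1)[1]
    return f"{factoid} {body}."

def correct_grammar(text: str) -> str:
    return text  # placeholder for a GECToR-style post-processing step

print(correct_grammar(draft_answer("Who wrote Hamlet?", "Shakespeare")))
# -> "Shakespeare wrote Hamlet."
```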
Approximate combinatorial optimization has emerged as one of the most promising application areas for quantum computers, particularly near-term ones. In this work, we focus on the quantum approximate optimization algorithm (QAOA) for solving the MaxCut problem. Specifically, we address two problems in the QAOA: how to select initial parameters, and how to subsequently train the parameters to find an optimal solution. For the former, we propose graph neural networks (GNNs) as an initialization routine for the QAOA parameters, adding to the literature on warm-starting techniques. We show that the GNN approach generalizes not only across graph instances but also to increasing graph sizes, a feature not available to other warm-starting techniques. For training the QAOA, we test several optimizers for the MaxCut problem, including the quantum-aware and quantum-agnostic optimizers proposed in the literature, as well as machine learning techniques such as reinforcement learning and meta-learning. By incorporating these initialization and optimization toolkits, we demonstrate how to train the QAOA as an end-to-end differentiable pipeline.
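A self-contained NumPy sketch of the QAOA for MaxCut on a small graph: build the diagonal cost Hamiltonian, simulate the alternating cost/mixer layers, and optimize the 2p parameters classically. Warm-start methods such as the GNN initializer discussed above would supply `init_params` in place of the random guess used here.

```python
# Classical simulation of QAOA for MaxCut on a 4-cycle; p layers, 2p params.
import numpy as np
import networkx as nx
from scipy.optimize import minimize

graph, n, p = nx.cycle_graph(4), 4, 2

# Diagonal of the MaxCut cost over all 2^n bitstrings (number of cut edges).
idx = np.arange(2 ** n)
diag = sum(((idx >> i) & 1) != ((idx >> j) & 1) for i, j in graph.edges).astype(float)

def apply_mixer(state, beta):
    c, s = np.cos(beta), -1j * np.sin(beta)            # e^{-i beta X} per qubit
    for q in range(n):
        psi = state.reshape(2 ** (n - q - 1), 2, 2 ** q).copy()
        a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
        psi[:, 0, :], psi[:, 1, :] = c * a + s * b, c * b + s * a
        state = psi.reshape(-1)
    return state

def neg_expected_cut(params):
    gammas, betas = params[:p], params[p:]
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * diag) * state          # cost layer
        state = apply_mixer(state, beta)                    # mixer layer
    return -float(np.real(np.vdot(state, diag * state)))

init_params = 0.1 * np.random.rand(2 * p)   # a GNN warm start would go here
result = minimize(neg_expected_cut, init_params, method="COBYLA")
print("expected cut value:", -result.fun)   # optimum for the 4-cycle is 4
```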